Remote sensing imagery provides comprehensive views of the Earth, where different sensors collect complementary data at different spatial scales. Large, pretrained models are commonly finetuned with imagery that is heavily augmented to mimic different conditions and scales, and the resulting models are used for various tasks with imagery from a range of spatial scales. Such models overlook scale-specific information in the data. In this paper, we present Scale-MAE, a pretraining method that explicitly learns relationships between data at different, known scales throughout the pretraining process. Scale-MAE pretrains a network by masking an input image at a known input scale, where the area of the Earth covered by the image determines the scale of the ViT positional encoding, not the image resolution. Scale-MAE encodes the masked image with a standard ViT backbone, and then decodes the masked image through a bandpass filter to reconstruct low/high frequency images at lower/higher scales. We find that tasking the network with reconstructing both low and high frequency images leads to robust multiscale representations for remote sensing imagery. Scale-MAE achieves an average $5.0\%$ non-parametric kNN classification improvement across eight remote sensing datasets compared to the current state of the art, and obtains a $0.9$ to $3.8$ mIoU improvement on the SpaceNet building segmentation transfer task across a range of evaluation scales.
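To make the scale-dependent positional encoding concrete, the sketch below shows one way a sinusoidal ViT positional encoding can be scaled by ground sample distance (GSD) relative to a reference GSD, so that two images with the same pixel grid but different ground coverage receive different encodings. It is a minimal illustration in the spirit of Scale-MAE, not the authors' implementation; the reference GSD, frequency base, and channel split are assumptions.

```python
# Minimal sketch (assumptions, not the Scale-MAE code) of a GSD-aware sinusoidal
# positional encoding: patch positions are scaled by gsd / ref_gsd so the encoding
# reflects the area of Earth covered rather than the pixel grid alone.
import numpy as np

def gsd_positional_encoding(num_patches_per_side, dim, gsd, ref_gsd=1.0):
    """Return a (num_patches_per_side**2, dim) array of positional encodings."""
    assert dim % 4 == 0, "dim must be divisible by 4 for a 2D sin/cos encoding"
    # Patch-grid coordinates, scaled by how much ground each patch covers.
    coords = np.arange(num_patches_per_side, dtype=np.float64) * (gsd / ref_gsd)
    d = dim // 4  # channels per (axis, sin/cos) combination
    freqs = 1.0 / (10000.0 ** (np.arange(d) / d))                      # (d,)
    angles = coords[:, None] * freqs[None, :]                          # (P, d)
    enc_1d = np.concatenate([np.sin(angles), np.cos(angles)], axis=1)  # (P, dim/2)
    # Combine row and column encodings for every patch on the 2D grid.
    row = np.repeat(enc_1d, num_patches_per_side, axis=0)
    col = np.tile(enc_1d, (num_patches_per_side, 1))
    return np.concatenate([row, col], axis=1)                          # (P*P, dim)

# Example: the same 14x14 patch grid at 0.3 m/px vs. 10 m/px yields different encodings.
pe_fine = gsd_positional_encoding(14, 768, gsd=0.3)
pe_coarse = gsd_positional_encoding(14, 768, gsd=10.0)
print(pe_fine.shape, np.abs(pe_fine - pe_coarse).mean())
```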
Remote sensing images are useful for a wide variety of environmental and earth monitoring tasks, including tracking deforestation, illegal fishing, urban expansion, and natural disasters. The earth is extremely diverse -- the number of potential tasks in remote sensing images is massive, and the sizes of features range from several kilometers to just tens of centimeters. However, creating generalizable computer vision methods is a challenge in part due to the lack of a large-scale dataset that captures these diverse features for many tasks. In this paper, we present Satlas, a remote sensing dataset and benchmark that is large in both breadth, featuring all of the aforementioned applications and more, and scale, comprising 290M labels under 137 categories and seven label modalities. We evaluate eight baselines and a proposed method on Satlas, and find that there is substantial room for improvement in addressing research challenges specific to remote sensing, including processing image time series that consist of images from very different types of sensors, and taking advantage of long-range spatial context. We also find that pre-training on Satlas substantially improves performance on downstream tasks with few labeled examples, increasing average accuracy by 16% over ImageNet and 5% over the next best baseline.
Pruning neural networks has gained popularity over the past decade, after it was shown that a large number of weights can be safely removed from modern neural networks without compromising accuracy. Numerous pruning methods have been proposed since then, each claiming to be better than those before it. Many state-of-the-art (SOTA) techniques today rely on complex pruning methodologies that use importance scores, obtain feedback through back-propagation, or rely on heuristics-based pruning rules, among other approaches. We question this pattern of introducing complexity in order to obtain better pruning results. We benchmark these SOTA techniques against Global Magnitude Pruning (Global MP), a naive pruning baseline, to evaluate whether complexity is really needed to achieve higher performance. Global MP ranks weights in order of their magnitude and prunes the smallest ones; in its vanilla form, it is therefore one of the simplest pruning techniques. Surprisingly, we find that vanilla Global MP outperforms all the other SOTA techniques and achieves a new SOTA result. It also achieves good performance in terms of FLOPs sparsification, which we find is further enhanced when pruning is performed in a gradual manner. We also find that Global MP generalizes across tasks, datasets, and models with superior performance. Moreover, a common problem that many pruning algorithms run into at high sparsity rates can easily be fixed in Global MP by setting a minimum threshold of weights to retain in each layer. Finally, unlike many other SOTA techniques, Global MP does not require any additional algorithm-specific hyperparameters and is very straightforward to tune and implement. We showcase our findings on a variety of models (WRN-28-8, ResNet-32, ResNet-50, MobileNet-V1, and FastGRNN) and multiple datasets (CIFAR-10, ImageNet, and HAR-2). Code is available at https://github.com/manasgupta-1/globalmp.
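Because Global MP is fully described above, a short sketch can show the whole method: rank every weight in the network by absolute magnitude, zero out the smallest fraction, and optionally enforce a minimum per-layer retention so no layer is emptied at high sparsity. The PyTorch code below is a hedged illustration of that recipe, not the released implementation; the `min_keep_per_layer` default is an assumption.

```python
# Sketch of global magnitude pruning with a per-layer minimum retention fraction.
import torch
import torch.nn as nn

def global_magnitude_prune(model, sparsity=0.9, min_keep_per_layer=0.02):
    weights = [m.weight.data for m in model.modules()
               if isinstance(m, (nn.Linear, nn.Conv2d))]
    all_mags = torch.cat([w.abs().flatten() for w in weights])
    k = int(sparsity * all_mags.numel())
    threshold = torch.kthvalue(all_mags, k).values if k > 0 else torch.tensor(0.0)
    for w in weights:
        mask = w.abs() > threshold
        min_keep = max(1, int(min_keep_per_layer * w.numel()))
        if mask.sum().item() < min_keep:
            # Keep at least the largest-magnitude weights in this layer.
            topk = torch.topk(w.abs().flatten(), min_keep).indices
            mask = torch.zeros_like(w, dtype=torch.bool).flatten()
            mask[topk] = True
            mask = mask.view_as(w)
        w.mul_(mask)  # zero out pruned weights in place
    return model

# Example usage on a toy model:
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Flatten(),
                      nn.Linear(16 * 30 * 30, 10))
global_magnitude_prune(model, sparsity=0.95)
```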
Accurately estimating snowpack in key mountainous basins is critical for water resource managers to make decisions that impact local and global economies, wildlife, and public policy. Currently, this estimation requires multiple LiDAR-equipped plane flights or in situ measurements, both of which are expensive, sparse, and biased towards accessible regions. In this paper, we demonstrate that fusing spatial and temporal information from multiple, publicly available satellite and weather data sources enables estimation of snowpack in key mountainous regions. Our multi-source model outperforms single-source estimates by 5.0 inches RMSE and outperforms estimates from sparse in situ measurements by 1.2 inches RMSE.
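As a rough illustration of what multi-source fusion can look like in code, the sketch below concatenates co-located features from several hypothetical satellite and weather products and fits a single regressor to a snow-depth target. All feature names and the synthetic data are assumptions for illustration; the paper's actual model and inputs may differ substantially.

```python
# Hedged sketch of multi-source fusion for snowpack regression (synthetic data only).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Stand-ins for co-located, temporally aligned inputs (optical, SAR, terrain, weather).
optical = rng.normal(size=(n, 4))      # e.g., optical reflectance bands
sar = rng.normal(size=(n, 2))          # e.g., SAR backscatter
terrain = rng.normal(size=(n, 3))      # e.g., elevation, slope, aspect
weather = rng.normal(size=(n, 5))      # e.g., temperature / precipitation history
snow_depth_in = rng.gamma(2.0, 10.0, size=n)   # synthetic target, inches

X = np.hstack([optical, sar, terrain, weather])   # fusion by feature concatenation
X_tr, X_te, y_tr, y_te = train_test_split(X, snow_depth_in, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"RMSE on synthetic data: {rmse:.1f} in")
```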
Unsustainable fishing practices worldwide pose a major threat to marine resources and ecosystems. Identifying vessels that evade monitoring systems, known as "dark vessels", is key to managing and securing the health of marine environments. With the rise of satellite-based synthetic aperture radar (SAR) imaging and modern machine learning (ML), it is now possible to automate detection of dark vessels day or night, under all-weather conditions. SAR images, however, require domain-specific treatment and are not widely accessible to the ML community. Moreover, the objects (vessels) are small and sparse, challenging traditional computer vision approaches. We present the largest labeled dataset for training ML models to detect and characterize vessels in SAR. xView3-SAR consists of nearly 1,000 analysis-ready SAR images from the Sentinel-1 mission, averaging 29,400-by-24,400 pixels each. The images are annotated using a combination of automated and manual analysis, and co-located bathymetry and wind state rasters accompany every SAR image. We provide an overview of the results from the xView3 Computer Vision Challenge, an international competition that used xView3-SAR for large-scale ship detection and characterization. We release the data (https://iuu.xview.us/) and code (https://github.com/diux-xview) to support ongoing development and evaluation of ML approaches for this important application.
Existing federated classification algorithms typically assume the local annotations at every client cover the same set of classes. In this paper, we aim to lift such an assumption and focus on a more general yet practical non-IID setting where every client can work on non-identical and even disjoint sets of classes (i.e., client-exclusive classes), and the clients have a common goal which is to build a global classification model to identify the union of these classes. Such heterogeneity in client class sets poses a new challenge: how to ensure different clients are operating in the same latent space so as to avoid the drift after aggregation? We observe that the classes can be described in natural languages (i.e., class names) and these names are typically safe to share with all parties. Thus, we formulate the classification problem as a matching process between data representations and class representations and break the classification model into a data encoder and a label encoder. We leverage the natural-language class names as the common ground to anchor the class representations in the label encoder. In each iteration, the label encoder updates the class representations and regulates the data representations through matching. We further use the updated class representations at each round to annotate data samples for locally-unaware classes according to similarity and distill knowledge to local models. Extensive experiments on four real-world datasets show that the proposed method can outperform various classical and state-of-the-art federated learning methods designed for learning with non-IID data.
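The core recipe, classification as matching between a data encoder's output and a label encoder's embedding of class names, can be sketched compactly. The code below is a schematic reading of that setup rather than the paper's implementation; the encoder architectures, the use of pretrained word vectors for class names, and the temperature value are assumptions.

```python
# Sketch of matching-based classification with a data encoder and a label encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatchingClassifier(nn.Module):
    def __init__(self, data_encoder, label_encoder, class_name_vectors, temperature=0.07):
        super().__init__()
        self.data_encoder = data_encoder          # maps inputs -> (B, D)
        self.label_encoder = label_encoder        # maps class-name vectors -> (C, D)
        self.class_name_vectors = class_name_vectors
        self.temperature = temperature

    def forward(self, x):
        z = F.normalize(self.data_encoder(x), dim=-1)                        # (B, D)
        c = F.normalize(self.label_encoder(self.class_name_vectors), dim=-1)  # (C, D)
        return z @ c.t() / self.temperature                                   # (B, C) logits

# Toy usage: class names are embedded by a shared label encoder, so clients holding
# disjoint class sets still project into a common latent space anchored by language.
data_encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
label_encoder = nn.Sequential(nn.Linear(300, 64), nn.ReLU(), nn.Linear(64, 16))
class_name_vectors = torch.randn(5, 300)   # stand-in for pretrained name embeddings
model = MatchingClassifier(data_encoder, label_encoder, class_name_vectors)
logits = model(torch.randn(8, 32))
loss = F.cross_entropy(logits, torch.randint(0, 5, (8,)))
```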
The rise in data has led to the need for dimension reduction techniques, especially in the area of non-scalar variables, including time series, natural language processing, and computer vision. In this paper, we specifically investigate dimension reduction for time series through functional data analysis. Current methods for dimension reduction in functional data are functional principal component analysis and functional autoencoders, which are limited to linear mappings or scalar representations of the time series and are therefore inefficient; in real data applications, the nature of the data is much more complex. We propose a non-linear function-on-function approach, consisting of a functional encoder and a functional decoder, that uses continuous hidden layers of continuous neurons to learn the structure inherent in functional data, addressing the aforementioned limitations of existing approaches. Our approach yields a low-dimensional latent representation by reducing both the number of functional features and the number of timepoints at which the functions are observed. The effectiveness of the proposed model is demonstrated through multiple simulations and real data examples.
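To make the idea of a functional layer concrete, the sketch below implements a simplified functional encoder layer in which each hidden unit applies a weight function, expanded in a Fourier basis, to an input curve and approximates the inner product by numerical integration. This is a reduced illustration (it maps curves to scalar latent features only), not the proposed function-on-function architecture; the basis choice and activation are assumptions.

```python
# Simplified sketch of one functional encoder layer acting on curves observed on a grid.
import numpy as np

def fourier_basis(t, n_basis):
    """Fourier basis functions evaluated on grid t, shape (len(t), n_basis)."""
    cols = [np.ones_like(t)]
    for k in range(1, (n_basis + 1) // 2 + 1):
        cols.append(np.sin(2 * np.pi * k * t))
        cols.append(np.cos(2 * np.pi * k * t))
    return np.stack(cols[:n_basis], axis=1)

class FunctionalEncoderLayer:
    def __init__(self, n_basis, n_hidden, rng):
        self.coefs = rng.normal(scale=0.1, size=(n_basis, n_hidden))  # basis coefficients
        self.bias = np.zeros(n_hidden)

    def forward(self, x_curves, t):
        B = fourier_basis(t, self.coefs.shape[0])   # (T, n_basis)
        W = B @ self.coefs                          # weight functions on the grid, (T, n_hidden)
        # h_j = tanh( integral of x(t) * w_j(t) dt + b_j ), trapezoidal rule over the grid
        integrals = np.trapz(x_curves[:, :, None] * W[None, :, :], t, axis=1)
        return np.tanh(integrals + self.bias)       # (N, n_hidden) latent features

# Toy usage: 50 curves observed at 100 timepoints reduced to 8 latent features.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
curves = np.sin(2 * np.pi * t[None, :] * rng.uniform(1, 3, size=(50, 1)))
latent = FunctionalEncoderLayer(n_basis=7, n_hidden=8, rng=rng).forward(curves, t)
print(latent.shape)  # (50, 8)
```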
Landing an unmanned aerial vehicle (UAV) on top of an unmanned surface vehicle (USV) in harsh open waters is a challenging problem, owing to forces that can damage the UAV due to a severe roll and/or pitch angle of the USV during touchdown. To tackle this, we propose a novel model predictive control (MPC) approach enabling a UAV to land autonomously on a USV in these harsh conditions. The MPC employs a novel objective function and an online decomposition of the oscillatory motion of the vessel to predict, attempt, and accomplish the landing during near-zero tilt of the landing platform. The nonlinear prediction of the motion of the vessel is performed using visual data from an onboard camera. Therefore, the system does not require any communication with the USV or a control station. The proposed method was analyzed in numerous robotics simulations in harsh and extreme conditions and further validated in various real-world scenarios.
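A small sketch can illustrate the online-decomposition idea: fit the recent roll history with a few sinusoids by least squares, extrapolate the fit over the planning horizon, and target the instant of minimum predicted tilt. The code below is illustrative only and is not the proposed MPC; the fixed oscillation frequencies, the horizon, and the simulated roll signal are assumptions (in practice the dominant frequencies would themselves be estimated online).

```python
# Sketch: decompose the vessel's oscillatory roll and pick a near-zero-tilt touchdown time.
import numpy as np

def fit_oscillation(t, roll, freqs_hz):
    """Least-squares fit roll(t) ~ offset + sum_k a_k*sin(2*pi*f_k*t) + b_k*cos(2*pi*f_k*t)."""
    cols = [np.ones_like(t)]
    for f in freqs_hz:
        cols += [np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)]
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, roll, rcond=None)
    return lambda tq: np.stack(
        [np.ones_like(tq)] + [g(2 * np.pi * f * tq) for f in freqs_hz for g in (np.sin, np.cos)],
        axis=1) @ coef

# Toy usage: noisy roll history from a camera-based estimator (simulated here).
rng = np.random.default_rng(1)
t_hist = np.linspace(0.0, 20.0, 400)
roll_hist = (6 * np.sin(2 * np.pi * 0.12 * t_hist)
             + 2 * np.sin(2 * np.pi * 0.3 * t_hist)
             + rng.normal(scale=0.3, size=t_hist.size))
predict = fit_oscillation(t_hist, roll_hist, freqs_hz=[0.12, 0.3])

t_future = np.linspace(20.0, 30.0, 200)   # planning horizon
best = t_future[np.argmin(np.abs(predict(t_future)))]
print(f"attempt touchdown near t = {best:.2f} s, "
      f"predicted roll = {predict(np.array([best]))[0]:.2f} deg")
```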
Multiple studies have focused on predicting the prospective popularity of an online document as a whole, without paying attention to the contributions of its individual parts. We introduce the task of proactively forecasting popularities of sentences within online news documents solely utilizing their natural language content. We model sentence-specific popularity forecasting as a sequence regression task. For training our models, we curate InfoPop, the first dataset containing popularity labels for over 1.7 million sentences from over 50,000 online news documents. To the best of our knowledge, this is the first dataset automatically created using streams of incoming search engine queries to generate sentence-level popularity annotations. We propose a novel transfer learning approach involving sentence salience prediction as an auxiliary task. Our proposed technique coupled with a BERT-based neural model exceeds nDCG values of 0.8 for proactive sentence-specific popularity forecasting. Notably, our study presents a non-trivial takeaway: though popularity and salience are different concepts, transfer learning from salience prediction enhances popularity forecasting. We release InfoPop and make our code publicly available: https://github.com/sayarghoshroy/InfoPopularity
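One plausible realization of a BERT-based model with salience prediction as an auxiliary task is a shared encoder with two regression heads, trained first (or jointly) on salience labels and then on InfoPop's popularity labels. The sketch below is an assumed setup, not the released code; the model name, head design, and loss are illustrative.

```python
# Sketch of a shared BERT encoder with popularity and auxiliary salience heads.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SentenceScorer(nn.Module):
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.popularity_head = nn.Linear(hidden, 1)   # main task
        self.salience_head = nn.Linear(hidden, 1)     # auxiliary / transfer task

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]             # [CLS] representation per sentence
        return self.popularity_head(cls).squeeze(-1), self.salience_head(cls).squeeze(-1)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SentenceScorer()
batch = tokenizer(["The quake struck before dawn.", "Officials will meet on Tuesday."],
                  padding=True, truncation=True, return_tensors="pt")
popularity, salience = model(batch["input_ids"], batch["attention_mask"])
# One option: fit the salience head on salience labels first, then fine-tune the shared
# encoder and popularity head on popularity labels with a regression loss.
loss = nn.MSELoss()(popularity, torch.tensor([0.7, 0.2]))
```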
The ability for an agent to continuously learn new skills without catastrophically forgetting existing knowledge is of critical importance for the development of generally intelligent agents. Most methods devised to address this problem depend heavily on well-defined task boundaries, and thus depend on human supervision. Our task-agnostic method, Self-Activating Neural Ensembles (SANE), uses a modular architecture designed to avoid catastrophic forgetting without making any such assumptions. At the beginning of each trajectory, a module in the SANE ensemble is activated to determine the agent's next policy. During training, new modules are created as needed and only activated modules are updated to ensure that unused modules remain unchanged. This system enables our method to retain and leverage old skills, while growing and learning new ones. We demonstrate our approach on visually rich procedurally generated environments.
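The module-selection loop can be sketched schematically: each module carries a key embedding, the module whose key best matches the trajectory's initial observation is activated, a new module is spawned when no key is similar enough, and only the activated module is updated. The code below reflects our reading of that high-level recipe, not the SANE implementation; the key representation, similarity threshold, and policy architecture are assumptions.

```python
# Schematic sketch of a self-activating module ensemble for continual learning.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModuleEnsemble(nn.Module):
    def __init__(self, obs_dim, act_dim, new_module_threshold=0.5):
        super().__init__()
        self.obs_dim, self.act_dim = obs_dim, act_dim
        self.threshold = new_module_threshold
        self.keys = nn.ParameterList()       # one key embedding per module
        self.policies = nn.ModuleList()      # one small policy network per module

    def _spawn(self, obs):
        self.keys.append(nn.Parameter(obs.detach().clone()))
        self.policies.append(nn.Sequential(
            nn.Linear(self.obs_dim, 64), nn.Tanh(), nn.Linear(64, self.act_dim)))
        return len(self.policies) - 1

    def select_module(self, obs):
        """Called once at the start of each trajectory."""
        if len(self.keys) == 0:
            return self._spawn(obs)
        sims = torch.stack([F.cosine_similarity(obs, k, dim=0) for k in self.keys])
        best = int(torch.argmax(sims))
        return best if sims[best] >= self.threshold else self._spawn(obs)

    def act(self, obs, module_idx):
        return self.policies[module_idx](obs)

ensemble = ModuleEnsemble(obs_dim=8, act_dim=4)
obs0 = torch.randn(8)
idx = ensemble.select_module(obs0)        # activate (or create) a module
action_logits = ensemble.act(obs0, idx)
# During training, only ensemble.policies[idx] would receive gradient updates, so
# modules that were not activated stay unchanged and previously learned skills persist.
```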